${\bf \large \cal Hovhannes \ Grigoryan}\\ {\rm \small New \ York, \ NY}$
We study those features of Bose-Einstein statistics that lead to condensation, even in the absence of interactions between bosons. We start by finding all single-particle states of a 3D harmonic trap below an energy cutoff, and discuss various ways to count the degeneracy of each energy level. We also discuss the difference between partition functions of classical and quantum particles. We study the example of five bosons bound in a harmonic trap, and find the dependence of the condensate fraction on temperature by directly enumerating all states. We then extend this to an arbitrary number of trapped bosons, using the canonical and grand-canonical formalisms [1].
- Non-interacting ideal bosons
- Five bosons bound in a 3D harmonic trap
- Analytic approach: bosons in canonical ensemble
- Trapped bosons in grand canonical ensemble
- Large-N limit in grand canonical ensemble
- Appendix: Gas of photons
- Appendix: Oscillators and strings
[1] We closely follow the outline of the lectures by W. Krauth, "Statistical Mechanics: Algorithms and Computations", OUP Oxford, 2006
[2] M. A. M. Versteegh and D. Dieks, "The Gibbs Paradox and the Distinguishability of Identical Particles", Am. J. Phys. 79 (2011), https://arxiv.org/abs/1012.4111v2
[3] R. Kubo, "Statistical Mechanics: An Advanced Course with Problems and Solutions", North-Holland Pub. Co (1971)
%%html
<style>
body, p, div.rendered_html {
color: black;
font-family: "Times New Roman", serif;
font-size: 12pt;
}
</style>
import sys, os
import warnings
sys.path.append(os.path.abspath(os.path.join('..')))
warnings.filterwarnings('ignore')
import random
import numpy as np
import pandas as pd
import cmath
from itertools import combinations, product
from collections import OrderedDict
from numba import njit, jit
from sympy import series, symbols, Function, Sum, Product, Poly
import matplotlib.pyplot as plt
from matplotlib import animation, rc, cm
from mpl_toolkits.mplot3d import axes3d, Axes3D
from utils.make_media import *
from utils.html_converter import html_converter
from IPython.display import HTML, Image, clear_output
%precision 20
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.rcParams['figure.figsize'] = (14,8)
plt.style.use('fivethirtyeight')
rc('animation', html='html5')
A gas of non-interacting bosons is the only system in physics that can undergo a phase transition without mutual interactions between its components. Let us enumerate the energy eigenstates of a single boson in a 3D harmonic trap with the following program. For simplicity, assume all spring constants are one ($\omega_{x,y,z}=1$, $\hbar=1$), and set the energy of the ground state equal to zero (that is, measure the energy along each axis relative to its zero-point energy $\hbar \omega_{x,y,z}/2=1/2$). In this case: $E_{n_x, n_y, n_z} = n_x + n_y + n_z$.
def energy_states(Emax):
# enumerate all (E_x, E_y, E_z) triples, sorted by total energy
# (plain Python: numba cannot compile lists of nested tuples)
States = []
for E_x in range(Emax):
for E_y in range(Emax):
for E_z in range(Emax):
States.append(((E_x + E_y + E_z), (E_x, E_y, E_z)))
States.sort()
states = [States[k] for k in range(Emax)]  # first Emax states
energies = [States[k][0] for k in range(Emax)]
distinct_energies = sorted(set(energies))
return states, energies, distinct_energies
states, energies, distinct_energies = energy_states(Emax=35)
print("k, E, (Ex, Ey, Ez)")
for k, s in enumerate(states):
print(f"{k}, {s[0]}, {s[1]}")
These 35 states are the distinct single-particle states with energies bounded by $4$. In other words, we have $35$ different types of particles, $\sigma_0, \dots, \sigma_{34}$ (plus the no-particle state), with energies not exceeding $4$. For example, $\sigma_{21}$ denotes a particle with quantum numbers $(0, 1, 3)$.
@njit
def naive_degeneracy(Emax):
degen = np.zeros(Emax+1)
for E_x in range(Emax+1):
for E_y in range(Emax+1):
for E_z in range(Emax+1):
E = (E_x + E_y + E_z)
if E <= Emax:
degen[E] += 1
return degen
naive_degeneracy(Emax=20).astype(int)
How many states have energy $E$? The degeneracy of energy level $E$ is
$$\mathcal{N}(E) = \frac{(E+1)(E+2)}{2}.$$This is the number of ways one can place $E$ indistinguishable balls into $d=3$ distinct bins, ${E+d-1 \choose d-1} = {E+2 \choose 2}$. The bins are distinct in the following sense: for $E=2$, $d=2$, the following $3$ partitions are counted as different:
$$(0,2), (1,1), (2,0)$$In a more formal approach, the number of states of energy $E$ can be determined by first observing that:
$$\mathcal{N}(E)=\sum_{E_x=0}^{E}\sum_{E_y=0}^{E}\sum_{E_z=0}^{E}\delta_{(E_x+E_y+E_z),E},$$where $\delta_{j,k}$ is the Kronecker delta, and then using its integral representation,
$$\delta_{j,k}\rightarrow\delta(j-k) =\int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{i(j-k)\lambda} \implies$$ $$\mathcal{N}(E)=\int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{-i E\lambda}\left(\sum_{E_x=0}^{E}e^{i E_x\lambda}\right)^3 = \int_{-\pi}^\pi \frac{\text{d}\lambda}{2\pi}e^{-i E\lambda}\left[\frac{1-e^{i\lambda (E+1)}}{1-e^{i\lambda}}\right]^3 \\= \frac{1}{2\pi i}\oint_{\mathcal{C}}\frac{\text{d}z}{z^{E+1}}\left[\frac{1-z^{E+1}}{1-z}\right]^3 = \frac{1}{2\pi i}\oint_{\mathcal{C}}\frac{\text{d}z}{z^{E+1}}\left[\frac{1}{2}\sum_{i=0}^{\infty}(i+1)(i+2)z^i \right]\left(1-z^{E+1}\right)^3 = \frac{(E+1)(E+2)}{2},$$where $z=e^{i\lambda}$. The integration range corresponds to a circular contour $\mathcal{C}$ of radius 1 centered at the origin in the complex plane. By the residue theorem, the integral equals the coefficient of the $z^{-1}$ term in the Laurent series of $\frac{1}{z^{E+1}}\left[\frac{1-z^{E+1}}{1-z}\right]^3$.
Below is a way to check that: $1/(1-z)^3 = \left(\sum_{i=0}^{\infty}z^i\right)^3 = \frac{1}{2}\sum_{i=0}^{\infty}(i+1)(i+2)z^i$
x, E, i, k = symbols("x, E, i, k", positive=True, real=True)
def f(x, E):
a = Sum(x ** i, (i, 0, E)).doit()
b = a ** 3
return b.doit()
E = 15
series(f(x, E), x, 0, E)
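As an additional sanity check on the closed form $\mathcal{N}(E) = (E+1)(E+2)/2$, here is a small self-contained brute-force count (independent of the sympy series above):

```python
from itertools import product

def degeneracy_bruteforce(E, d=3):
    """Count tuples (E_1, ..., E_d) of non-negative integers summing to E."""
    return sum(1 for t in product(range(E + 1), repeat=d) if sum(t) == E)

# the closed form (E + 1)(E + 2)/2 holds for d = 3
for E in range(12):
    assert degeneracy_bruteforce(E) == (E + 1) * (E + 2) // 2
```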
The generating function that produces the above degeneracies can be generalized to an arbitrary number of dimensions. For a $d$-dimensional harmonic oscillator, the degeneracy is found from the following generating function:
$$f(x) = (1+x+x^2+x^3+\dots +x^{E})^d = \left[\frac{1 - x^{E+1}}{1 - x}\right]^d $$Again, the ordering of terms in the sum, $E = E_1 + \dots + E_d$, is relevant. The degeneracy for energy $E$ can be extracted from the derivative (essentially the same as the complex integration above, by Cauchy's integral formula):
$$\frac{1}{E!}\left(\frac{d^E f(x)}{d x^E}\right)_{x=0}$$
def f(x, d, E):
a = Sum(x ** i, (i, 0, E)).doit()
b = a ** d
return b.doit()
E = 15
d = 3
series(f(x, d, E), x, 0, E)
For arbitrary number of independent bosonic oscillations (spatial d-dimensions), the number of ways to partition $E$ into $d$ directions is $E+d-1 \choose d-1$ or:
$$\mathcal{N}(E) = \frac{(E+d-1)!}{E! (d-1)!}.$$
import operator as op
from functools import reduce
def n_choose_k(n, k):
k = min(k, n-k)
numer = reduce(op.mul, range(n, n-k, -1), 1)
denom = reduce(op.mul, range(1, k+1), 1)
return numer // denom  # exact integer division (avoids float overflow)
E = 4
d = 9
n_choose_k(n=E+d-1, k=d-1)
Compare this with the code in the previous subsection
series(f(x, d, E), x, 0, E+1)
Now suppose the order of the terms in the sum is irrelevant. In this case, the generating function is:
$$f(x) = \prod_{k=1}^{\infty}\left[\frac{1}{1 - x^k}\right] = \sum_{n=0}^{\infty}p(n) x^n $$Here $p(n)$ is the number of distinct ways (order is irrelevant) to represent a positive integer $n$ as a sum of positive integers. For example, for $n=4$ we have $p(4)=5$, and the five partitions are:
$$\{1+1+1+1\}, \ \ \{1+1+2\}, \ \ \{1+3\}, \ \ \{2+2\}, \ \ \{4\}$$The asymptotic formula, obtained by G. H. Hardy and S. Ramanujan in 1918, is:
$$ p(n) \sim \frac{1}{4 n\sqrt{3}}\exp\left(\pi \sqrt{\frac{2 n}{3}} \right) \ , \ \ \ n \to \infty $$
def f(x, n, Max=100):
return Product(1/(1 - x ** i), (i, 1, Max)).doit()
n = 15
fx = series(f(x, n), x, 0, n+1).removeO()
pn = Poly(fx, x).coeffs()[::-1]
np.array(pn)
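To see how good the Hardy-Ramanujan asymptotic is, we can compare it with exact values of $p(n)$ computed by a simple dynamic-programming recurrence (a self-contained sketch; the ratio slowly approaches 1):

```python
from math import exp, pi, sqrt

def partitions_exact(N):
    """p(0), ..., p(N) by dynamic programming over allowed part sizes."""
    p = [1] + [0] * N
    for k in range(1, N + 1):        # allow parts of size k
        for m in range(k, N + 1):
            p[m] += p[m - k]
    return p

def hardy_ramanujan(n):
    """Leading-order asymptotic p(n) ~ exp(pi sqrt(2n/3)) / (4 n sqrt(3))."""
    return exp(pi * sqrt(2.0 * n / 3.0)) / (4.0 * n * sqrt(3.0))

p = partitions_exact(500)
for n in (10, 50, 100, 500):
    print(n, p[n], hardy_ramanujan(n) / p[n])
```

For $n=4$ the table reproduces $p(4)=5$ from the example above.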
The number $p_k(n)$ of partitions of $n$ into exactly $k$ positive parts is determined from the following generating function:
$$f(x) = x^k\prod_{j=1}^{k}\left[\frac{1}{1 - x^j}\right] = \sum_{n=0}^{\infty}p_k(n) x^n $$It should be clear that $p(n) = \sum_{j=1}^n p_j(n)$. On the other hand, the number of partitions of $n$ in which the largest part has size at most $k$ (equivalently, partitions into at most $k$ parts) is determined from:
$$g(x) = \prod_{j=1}^{k}\left[\frac{1}{1 - x^j}\right] = \sum_{n=0}^{\infty}p_{\leq k}(n) x^n, $$where $p_{\leq k}(n) = \sum_{j=1}^k p_j(n)$.
def f(x, k):
return (x ** k) * Product(1/(1 - x ** i), (i, 1, k)).doit()
def p(n, k, Max=100):
# coefficient of x^n in f(x, k): partitions of n into exactly k parts
fx = series(f(x, k), x, 0, Max+1).removeO()
pnk = Poly(fx, x).coeffs()[::-1]
return pnk[n-k]
def pk(n, k, Max=100):
# p_{<=k}(n) = p_k(n+k) (add 1 to each of the k parts);
# can also be expressed as \sum_{j=1}^k p(n, j)
fx = series(f(x, k), x, 0, Max+1).removeO()
pnk = Poly(fx, x).coeffs()[::-1]
return pnk[n]
n = 9
print("p(n, k) = number of partitions of n into exactly k parts")
for k in range(1, n+1):
print(f"p({n}, {k}) =", p(n, k))
print("\np(n, <= k) = number of partitions of n into at most k parts")
for k in range(1, n+1):
print(f"p({n}, <= {k}) =", pk(n, k))
In $d$ dimensions the degeneracy of energy level $E$ is ${E+d-1 \choose d-1}$, while the number of distinct (unordered) states is given by the restricted partition count $p_{\leq d}(E)$.
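We can verify this claim directly for $d=3$: the number of distinct (unordered) states at energy $E$ should equal $p_{\leq 3}(E)$, the number of partitions of $E$ into at most $3$ parts (a small self-contained sketch):

```python
from itertools import product

def distinct_states(E, d=3):
    """Unordered d-tuples of non-negative integers summing to E."""
    return len({tuple(sorted(t)) for t in product(range(E + 1), repeat=d)
                if sum(t) == E})

def p_at_most(n, k):
    """Partitions of n into at most k positive parts, via the recurrence
    p_<=k(n) = p_<=k-1(n) + p_<=k(n - k)."""
    if n == 0:
        return 1
    if k == 0 or n < 0:
        return 0
    return p_at_most(n, k - 1) + p_at_most(n - k, k)

for E in range(10):
    assert distinct_states(E) == p_at_most(E, 3)
```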
The code below enumerates all states of $n$ oscillators, grouping them by total energy and recording, for each level, the distinct (unordered) states with their degeneracies, the number of distinct states, and the total degeneracy:
def state_info(E, n=3):
""" Finds states and their partial and full degeneracies.
params:
E: (integer) max single-oscillator energy; total energies e = E_1 + ... + E_n run from 0 to n*E
n: (integer) number of independent oscillator dimensions
returns: pandas dataframe of energy_dict s.t.
energy_dict[e] -> [ [(distinct state, state degeneracy), ... , (distinct state, state degeneracy)],
total number of distinct states,
total degeneracy ]
"""
energy_dict = OrderedDict()
# cartesian product over the levels of each oscillator
for s in product(range(E + 1), repeat=n):
e = sum(s)  # total energy (do not shadow the cutoff E)
state = tuple(sorted(s))  # unordered representative of the state
if e not in energy_dict:
energy_dict[e] = [OrderedDict({state: 1}), 1, 1]
else:
if state not in energy_dict[e][0]:
energy_dict[e][0][state] = 1
energy_dict[e][1] += 1
else:
energy_dict[e][0][state] += 1
energy_dict[e][2] += 1
for k, v in energy_dict.items():
v[0] = sorted(v[0].items())
pd.set_option('display.max_colwidth', 0)
energy_df = pd.DataFrame(energy_dict).T
energy_df.columns = ["states", "distinct_states", "degeneracy"]
energy_df.index.name = "E"
return energy_df
%%time
energy_df = state_info(E=10, n=5)
energy_df.head(11)
Given a state of energy $E$, we want to get an occupation number representation of the same state:
$$(\epsilon_1, \epsilon_2, \dots, \epsilon_n) \to (n_0, n_1, n_2, \dots, n_E)$$where $n_k$ is the number of particles with energy $k$. For instance, for $E=9$ and $n=3$
$$ (0, 2, 7) \to (1, 0, 1, 0, 0, 0, 0, 1, 0, 0) $$
def convert(state, Es):
"""Convert a sorted state tuple into occupation numbers (n_0, ..., n_Es)."""
ns = tuple()
for i in range(Es + 1):
ns += (state.count(i),)
return ns
def occup_repr(E, n):
energy_df = state_info(E, n)
ns = []
ms = []
for s in energy_df["states"][E]:
occ = convert(s[0], E)  # occupation representation of the distinct state
m = s[1]  # its degeneracy
ns.append(occ)
ms.append(m)
pd.set_option('display.max_rows', 1000)
pd.set_option('display.max_columns', 1000)
df = pd.DataFrame(ns).T.iloc[::-1,:]
df.index.name = "Levels"
df.columns = ms
return df.replace(0, "-")
E=9
n=3
orep = occup_repr(E, n)
orep
data = orep.replace("-", np.nan).values[:-1,:].ravel()
plt.hist(data, bins=n)
plt.show()
n = 6
E = 12
energy_df = state_info(E, n)
list_all_states = []
list_all_distinct_states = []
#for E in range(E+1):
for s, m in energy_df["states"][E]:
ls = list(s * m)
ls2 = list(s)
list_all_states += ls
list_all_distinct_states += ls2
ns = np.arange(E)
c = 1.0/np.sum(np.exp(-n*ns/E))
plt.plot(ns, c * np.exp(-n*ns/E), label="$exp(-n\epsilon_n/E)$")
plt.hist(list_all_states, bins=E, density=True, color='r', alpha=0.3, label="all states")
plt.hist(list_all_distinct_states, bins=E, density=True, color='g', alpha=0.3, label="distinct states")
plt.legend(loc="upper right")
plt.xlabel("$\epsilon_n$ (energy of a single oscillator)")
plt.ylabel("density of state distribution")
plt.title(f"n={n} oscillators with energy E={E}")
#plt.ylim(0,0.1)
plt.show()
plt.plot(ns, c * np.exp(-n*ns/E), label="$exp(-n\epsilon_n/E)$")
plt.hist(list_all_states, bins=E, density=True, color='r', alpha=0.3, label="all states")
plt.hist(list_all_distinct_states, bins=E, density=True, color='g', alpha=0.3, label="distinct states")
plt.legend(loc="upper right")
plt.xlabel("$\epsilon_n$ (energy of a single oscillator)")
plt.ylabel("density of state distribution")
plt.title(f"n={n} oscillators with energy E={E}")
plt.xlim(6, 12)
plt.ylim(0, 0.01)
plt.show()
We observe that, compared to the density of all states, the density of distinct (bosonic) states shows a relative preference for configurations in which more bosons occupy higher energy levels (which is also why the number of empty levels increases).
Consider single particle states (in a 3D harmonic trap) with the following set of possible quantum numbers:
- $\sigma, \ E, \ (E_x, E_y, E_z)$
- 0, 0, (0, 0, 0)
- 1, 1, (0, 0, 1)
- 2, 1, (0, 1, 0)
- 3, 1, (1, 0, 0)
- 4, 2, (0, 0, 2)
- 5, 2, (0, 1, 1)
- 6, 2, (0, 2, 0)
- 7, 2, (1, 0, 1)
- 8, 2, (1, 1, 0)
- 9, 2, (2, 0, 0)
We have 10 different single particle states labeled by $\sigma$, and 3 possible energies. The single particle partition function is:
$$Z_1 = \sum_{\sigma \in \{\sigma_0, \dots, \sigma_9\}}e^{-\beta E(\sigma)} = 1 + 3 e^{-\beta} + 6 e^{-2\beta}.$$Two distinguishable particles can be in any of the following $100$ states:
$$E_1 E_2 = \{00: 1, \ \ 01: 3, \ \ 10: 3, \ \ 11: 9, \ \ 20: 6, \ \ 02: 6, \ \ 21: 18, \ \ 12: 18, \ \ 22: 36\}$$The partition function in case of distinguishable particles is:
$$Z^d_2 = \sum_{\sigma, \sigma' \in \{\sigma_0, \dots, \sigma_9\}}e^{-\beta E(\sigma, \sigma')} = 1 + 6 e^{-\beta} + 21 e^{-2\beta} + 36 e^{-3\beta} + 36 e^{-4\beta} = Z^2_1$$Gibbs paradox (distinguishability of identical particles): If we use similar logic to derive the partition function of an ideal gas, we face a contradiction: the entropy is not extensive, and we may violate the 2nd law of thermodynamics. This is known as the Gibbs paradox. Indeed, place a divider in the middle of a box filled with an ideal gas. When the divider is removed, the thermodynamic entropy remains the same, because the macroscopic properties of the gas do not change, while the calculated statistical entropy of the system increases: if the initial multiplicity is $X^{2 N}$ and the final multiplicity is $(2 X)^{2 N}$, then $\Delta S = 2 k_B N \log 2$. Similarly, the entropy can be decreased by the same amount by putting the divider back. Since this can be done without any other change in the environment, it would violate the 2nd law of thermodynamics. To "fix" this problem, the multiplicity or partition function is divided by $N!$, which removes the over-counting of states related by particle interchange. Now the initial multiplicity is $X^{2 N}/(N!)^2$ and the final multiplicity is $(2 X)^{2 N}/(2 N)!$, so $\Delta S \to 0$ in the $N \to \infty$ limit. As a result, the "fixed" partition function for an $N$-particle state should be $Z^d_N = Z^N_1/N!$. The $1/N!$ recovers correct classical thermodynamics; it does not arise from considerations of real distinguishability.
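The entropy bookkeeping above is easy to check numerically with Stirling's approximation (a sketch with hypothetical values of $X$ and $N$; entropies in units of $k_B$):

```python
from math import log, lgamma

def delta_S(N, X, corrected):
    """Entropy change (in units of k_B) on removing the divider between
    two halves, each holding N particles with X states per particle."""
    if corrected:  # divide each multiplicity by the corresponding N!
        S_initial = 2 * N * log(X) - 2 * lgamma(N + 1)
        S_final = 2 * N * log(2 * X) - lgamma(2 * N + 1)
    else:
        S_initial = 2 * N * log(X)
        S_final = 2 * N * log(2 * X)
    return S_final - S_initial

N, X = 10**6, 10**9                        # hypothetical sizes
print(delta_S(N, X, corrected=False) / N)  # 2*log(2) per particle
print(delta_S(N, X, corrected=True) / N)   # negligible per particle
```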
There are other opinions on the subject. For example, as discussed in Ref. [2], identical classical particles are always distinguishable, since they can be identified over time by their positions and trajectories (their different histories); a permutation of two of them therefore produces a different microstate, and the division by $N!$ is not necessary. Identical quantum particles, on the other hand, are indistinguishable from the start, and the factor $N!$ is again not required.
This emerges naturally if we consider classical particles independent and indistinguishable within the grand-canonical formalism shown below. However, we should keep in mind that the grand-canonical partition function is evaluated at a saddle point of the canonical partition function (in the large-$N$ limit). If we still treat classical particles as distinguishable, but count microstates related by permutations as one, the division of $Z_1^N$ by $N!$ becomes an approximation: microstates in which two or more particles occupy the same state have fewer than $N!$ permutations. The approximation is therefore only valid when the number of particles is smaller than the number of energy states, and the temperature or density are such that the number of states in which each level is occupied by at most one particle is much larger than the number of states in which two or more particles share a level. In particular, for an ideal gas at high temperatures and low densities it is unlikely to have more than one particle in the same state. Moreover, in the classical limit the energy spectrum becomes continuous; for a continuous spectrum the probability that two particles are in exactly the same state is zero, and the approximation is exact.
In quantum statistics there is no over-counting. If, for example, the system contains 2 bosons, each with energy $0$ or $\epsilon$ (an energy cutoff of one quantum), the only 3 possible many-body states are $(n_0, n_1) = (2,0), (1,1), (0,2)$, with total energies $0, \epsilon, 2\epsilon$.
In the case of quantum particles, indistinguishability applies to quantum states, not to particles; the particle concept in a quantum many-body system is ill-defined. This is why the partition function of this system is simply:
$$Z_2 = 1 + e^{-\beta \epsilon} + e^{-2\beta \epsilon}.$$In our example with the harmonic trap, because each particle is described by 3 quantum numbers, there are effectively $10$ single-particle states, $\sigma_0, \dots, \sigma_9$, and the 2-particle partition function for indistinguishable particles is a sum over the $\frac{(10 + 2 - 1)!}{2!\,(10-1)!} = 55$ states,
$$E_1 E_2 = \{00: 1, \ \ 01: 3, \ \ 11: 6, \ \ 02: 6, \ \ 12: 18, \ \ 22: 21\}$$Therefore, the partition function is:
$$Z^{id}_2 = \sum_{0 \leq \sigma \leq \sigma' \leq 9}e^{-\beta E(\sigma, \sigma')} = 1 + 3 e^{-\beta} + 12 e^{-2\beta} + 18 e^{-3\beta} + 21 e^{-4\beta} \neq Z^2_1/2!$$In terms of occupation numbers
$$Z^{id}_2 = \sum_{n_0=0}^2 \dots \sum_{n_{9}=0}^2 e^{-\beta (n_0 E_0 + \dots + n_9 E_9)}\delta_{(n_0 + \dots + n_9), 2} = \int^{\pi}_{-\pi} \frac{\text{d}\lambda}{2\pi}e^{-2 i \lambda}\prod_{E=0}^{2}\left(1 + e^{-\beta E + i \lambda} + e^{-2\beta E + 2 i \lambda} \right)^{\mathcal{N}(E)} \\= \int^{\pi}_{-\pi} \frac{\text{d}\lambda}{2\pi}e^{-2 i \lambda}\left(1 + e^{i \lambda} + e^{2 i \lambda} \right)\left(1 + e^{-\beta + i \lambda} + e^{-2\beta + 2 i \lambda} \right)^3 \left(1 + e^{-2\beta + i \lambda} + e^{-4\beta + 2 i \lambda} \right)^6,$$The prefactor $e^{-2i\lambda}$ keeps only the terms quadratic in $e^{i \lambda}$. The code below shows that this approach yields the same partition function as the earlier one.
b = symbols('e^{-\\beta}')
def f(x):
a = (1 + x + x**2) * ( (1 + b*x + (b*x)**2) ** 3 ) * ( (1 + b*b*x + (b*b*x)**2) **6 )
return a
fx = f(x).series(x, 0, 3).removeO()
pn = Poly(fx, x).coeffs()[0]
pn
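The same $Z^{id}_2$ can also be checked by a direct sum over unordered pairs $\sigma \leq \sigma'$ (a minimal sketch in plain Python, collecting the coefficients of $e^{-\beta E}$):

```python
from collections import Counter

# single-particle energies for the cutoff E <= 2:
# one state with E = 0, three with E = 1, six with E = 2
energies = [0] + [1] * 3 + [2] * 6

coeffs = Counter()
for s1 in range(10):
    for s2 in range(s1, 10):             # unordered pairs sigma <= sigma'
        coeffs[energies[s1] + energies[s2]] += 1

print(dict(coeffs))           # {0: 1, 1: 3, 2: 12, 3: 18, 4: 21}
print(sum(coeffs.values()))   # 55 distinct two-boson states
```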
The grand-canonical partition function $\zeta_i$ of the single $i$-th energy level, for classical particles (independent and indistinguishable), bosons, and fermions, is:
$$\zeta^{\text{classical}}_i = e^{z e^{-\beta E_i}}, \qquad \zeta^{\text{bosons}}_i = \frac{1}{1 - z e^{-\beta E_i}}, \qquad \zeta^{\text{fermions}}_i = 1 + z e^{-\beta E_i},$$where $z = e^{\beta \mu}$ is the fugacity, and we define $Z_1 \equiv \sum_{i=0}^{i_{max}} e^{-\beta E_i}$. The full grand-canonical partition functions are the products over levels:
$$Z^{\text{classical}} = \prod_i \zeta^{\text{classical}}_i = e^{z Z_1} = \sum_{N=0}^{\infty} \frac{Z^N_1}{N!} z^N, \qquad Z^{\text{bosons}} = \prod_i \frac{1}{1 - z e^{-\beta E_i}}, \qquad Z^{\text{fermions}} = \prod_i \left(1 + z e^{-\beta E_i}\right).$$The two-particle partition function is the term proportional to $z^2$. Notice that for indistinguishable classical particles the 2-particle partition function is $Z^2_1/2$. The code below verifies that we get the same 2-particle partition function for indistinguishable bosons as in the canonical formalism above.
In the special case $E_n = \epsilon n$, $E_{max} = \infty$, and $\mathcal{N}(E)=1$, we can rewrite the bosonic partition function using partitions of integers:
$$Z^{\text{bosons}} = \prod_{n=0}^{\infty}\frac{1}{1 - z e^{-\beta \epsilon n}} = \sum_{k=0}^{\infty} z^k \sum_{n=0}^{\infty} p_{\leq k}(n) \, e^{-\beta \epsilon n}, \qquad Z^{\text{bosons}}_k = \sum_{n=0}^{\infty} p_{\leq k}(n) \, e^{-\beta \epsilon n}.$$This result is intuitive: for $k$ bosons we only want the unordered partitions of the energy $n$ into at most $k$ parts (the parts being the energies of the bosons outside the ground level).
def f(x):
a = (1 - x) * ( (1 - b*x) **3 ) * ( (1 - b*b*x) **6 )
return 1/a
fx = f(x).series(x, 0, 3).removeO()
pn = Poly(fx, x).coeffs()[0]
pn
By "harmonic trap" we mean an isotropic 3D harmonic potential. Consider 5 bosons in the harmonic trap, with a cutoff on the single-particle energies: $E_\sigma\leq 4$. The degeneracies of the single-particle energy levels $E = 0, 1, 2, 3, 4$ are ${\cal N}(E) = 1, 3, 6, 10, 15$ respectively. In total, there are $35$ possible single-particle states (each state has 3 quantum numbers for the $x, y, z$ coordinates, $(E_{x, i}, E_{y, i}, E_{z, i})$, such that $E_i = E_{x, i} + E_{y, i} + E_{z, i} \leq 4$). We can label the state of each particle by $\sigma_i$, so that the total energy of a 5-boson state is $E_{\sigma_0, \dots, \sigma_4} = E_{\sigma_0} + \dots + E_{\sigma_4}$.
Here $\sigma_i$ is a number from $0$ to $34$, enumerated as in the example above: if $\sigma_i = 0$, the quantum numbers are $(0,0,0)$; if $\sigma_i=34$, they are $(4, 0, 0)$; if $E=3$, then $\sigma=10, \dots, 19$. The partition function of this system is given by
$$Z_5(\beta) = \sum_{0 \leq \sigma_0 \leq \sigma_1 \leq \dots \leq \sigma_4 \leq 34} e^{-\beta \left(E_{\sigma_0} + \dots + E_{\sigma_4}\right)}.$$This partition function takes into account the bosonic nature of the particles by including only distinct (unordered) sets of five $\sigma$'s. In the following program, the average occupation number of the ground state per particle is calculated.
Energy = [0.0] + [1.0] * 3 + [2.0] * 6 + [3.0] * 10 + [4.0] * 15
beta = 1.0
n_states = 0
Z = 0.0
N0_mean = 0.0
E_mean = 0.0
s_max = 35
# particles are indistinguishable: sum over ordered tuples s_0 <= s_1 <= ... <= s_4
for s_0 in range(s_max):
for s_1 in range(s_0, s_max):
for s_2 in range(s_1, s_max):
for s_3 in range(s_2, s_max):
for s_4 in range(s_3, s_max):
n_states += 1
state = [s_0, s_1, s_2, s_3, s_4]
E = sum([Energy[s] for s in state])
Z += np.exp(-beta * E)
E_mean += E * np.exp(-beta * E)
N0_mean += state.count(0) * np.exp(-beta * E)
print(n_states, Z, E_mean / Z / 5.0, N0_mean / Z / 5.0)
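The total count printed above is the number of multisets of size 5 drawn from 35 single-particle states, which we can confirm with the standard-library `math.comb` (an alternative to the `n_choose_k` helper defined earlier):

```python
from math import comb

n_states, n_bosons = 35, 5
# multisets: (n_states + n_bosons - 1) choose n_bosons
print(comb(n_states + n_bosons - 1, n_bosons))  # 575757
```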
The number of states is $(N_{states} + n_{bosons} - 1)!/[n_{bosons}!\,(N_{states}-1)!]$. That is why, for $n_{bosons}=5$ and $N_{states} = 35$, the number of states is $575757$. Below, we convert the above code into a function.
def make_tuples(depth, n, start=0):
if depth == 0:
yield ()
else:
for x in range(start, n):
for t in make_tuples(depth - 1, n, x):
yield (x,) + t
def bosons_bounded_harmonic(beta, Emax, N):
""" Here Emax is the energy cutoff for each boson,
N is the number of bosons in a 3D trap."""
Energy = []
n_states_1p = 0 # initialise the total number of single trapped boson states
for n in range(Emax + 1):
degeneracy = int((n + 1) * (n + 2) / 2) # degeneracy in the 3D harmonic oscillator
Energy += [float(n)] * degeneracy
n_states_1p += degeneracy
n_states_Np = 0 # initialise the total number states of N trapped bosons
Z = 0.0 # initialise the partition function
N0_mean = 0.0
E_mean = 0.0
for state in make_tuples(N, n_states_1p):
n_states_Np += 1
E = 0
for s in state:
E += Energy[s] # calculate the total energy by above enumeration
# print(state, E)
Z += np.exp(-beta * E) # canonical partition function
E_mean += E * np.exp(-beta * E) # avg. total energy
N0_mean += state.count(0) * np.exp(-beta * E) # avg. ground level occupation number
return n_states_Np, Z, E_mean, N0_mean
%%time
Emax = 4
N = 5
beta = 1.0 # inverse temperature
n_states_Np, Z, E_mean, N0_mean = bosons_bounded_harmonic(beta, Emax, N)
print('Temperature:', 1 / beta, '\nTotal number of possible states:', n_states_Np, '\nPartition function:', Z,\
'\nAverage energy per particle:', E_mean / Z / float(N),\
'\nCondensate fraction (ground state occupation per particle):', N0_mean / Z / float(N))
%%time
cond_frac = []
temperature = []
for T in np.linspace(0.1, 1.8, 20):
n_states_Np, Z, E_mean, N0_mean = bosons_bounded_harmonic(1.0 / T, Emax, N)
cond_frac.append(N0_mean / Z / float(N))
temperature.append(T/float(N)**(1/3.0))
plt.rcParams['figure.figsize']=(12, 8) # set the figure size
plt.plot(temperature, cond_frac)
plt.title(f"Condensate fraction of $N$={N} bosons bounded in trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
plt.xlabel('$T/N^{1/3}$', fontsize = 20)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
plt.xlim(0, 1.2)
plt.ylim(0.1, 1.05)
plt.show()
Reminder:
The microcanonical ensemble consists of a collection of copies of a particular isolated system (one for each state accessible to it) at a particular fixed energy E and particle number N.
In the canonical ensemble we consider the (closed) system in thermal contact with a reservoir at temperature T. This ensemble contains copies of the system together with the reservoir (one copy for each allowed state of the combined system). In the canonical ensemble the energy E varies among members of the ensemble, while the particle number N remains fixed.
The fact that at very low temperatures all particles are in the ground state is a simple consequence of Boltzmann statistics; in particular, at zero temperature all the particles populate the ground state. Bose-Einstein condensation is something else!
We have Bose-Einstein condensation when a finite fraction of the system is in the ground state at temperatures much higher than the gap between the ground state and the first excited state (which is one in our system). The temperature at which the condensate fraction decreases significantly scales as $T \sim N^{1/3}$; for large $N$ this temperature is much larger than the energy gap.
Bose-Einstein condensation occurs when all of a sudden a finite fraction of particles populate the single-particle ground state. In a trap, this happens at higher and higher temperatures as we increase the particle number.
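For orientation, the standard textbook result for an ideal Bose gas in an isotropic 3D harmonic trap (without an energy cutoff) is $k_B T_c = \hbar\omega\,(N/\zeta(3))^{1/3} \approx 0.94\,\hbar\omega\,N^{1/3}$, which is why we plot the curves against $T/N^{1/3}$. A quick numerical check of the prefactor:

```python
# zeta(3) by direct summation; the tail beyond 10^5 terms is ~ 5e-11
zeta3 = sum(1.0 / k**3 for k in range(1, 100000))
prefactor = (1.0 / zeta3) ** (1.0 / 3.0)
print(zeta3, prefactor)   # zeta(3) ~ 1.2021, prefactor ~ 0.9405
```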
Alternatively, in the example of 5 trapped bosons, we can characterize any single particle state $\sigma=0,\cdots,34$ by an occupation number $n_\sigma$. Using this occupation number representation, the energy is given by
$$E=n_0 E_0+\cdots + n_{34}E_{34},$$and the partition function is
$$Z_5(\beta)=\sum^{N=5}_{n_0=0}\cdots\sum^{N=5}_{n_{34}=0}e^{-\beta(n_0 E_0+\cdots + n_{34}E_{34})}\delta_{(n_0+\cdots + n_{34}),N=5}.$$We can generalize this for arbitrary $N$, and using integral representation of the Kronecker delta, we will have
$$Z_N(\beta) = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\prod_{j=0}^{j_\text{max}}\left(\sum_{n_j=0}^Ne^{n_j(-\beta E_j + i \lambda)}\right) = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\prod_{j=0}^{j_\text{max}}\left(\frac{1 - \Omega_j^{N+1}}{1 - \Omega_j}\right) \\= \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\prod_{E=0}^{E_\text{max}}[f_E(\beta,\lambda)]^{\mathcal{N}(E)}, $$where $\Omega_j = e^{-\beta E_j + i\lambda}$, $E_{\text{max}}$ is the maximal allowed energy per boson ($E_{\text{max}}=4$ in the above example), $\mathcal{N}(E) = (E+1)(E+2)/2$, and
$$f_E(\beta,\lambda) = \left\{ \begin{array}{ll} \frac{1 - e^{i(N+1)\lambda}}{1 - e^{i\lambda}} & \mbox{if $E = 0$}\\ \frac{1}{1 - e^{-\beta E + i\lambda}} & \mbox{if $E > 0$ and $\beta EN \gg 1$} \end{array} \right.$$In this case, the average ground state occupation number can be determined as follows:
$$\langle N_0(\beta, \lambda) \rangle = \frac{1}{Z_N(\beta)}\int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}N_0(\beta, \lambda)\prod_{E=0}^{E_\text{max}}[f_E(\beta,\lambda)]^{\mathcal{N}(E)},$$where
$$N_0(\beta, \lambda) = -i\frac{\partial \log f_0}{\partial \lambda} = \left[-\frac{(N+1)e^{i \lambda (N+1)}}{1 - e^{i\lambda (N+1)}} + \frac{e^{i\lambda}}{1-e^{i\lambda}}\right],$$and $f_k = \sum_{n_k}e^{n_k(-\beta E_k + i\lambda)}$. Similarly,
$$\langle E(\beta, \lambda) \rangle = \frac{1}{Z_N(\beta)}\int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}E(\beta, \lambda)\prod_{E=0}^{E_\text{max}}[f_E(\beta,\lambda)]^{\mathcal{N}(E)},$$where
$$E(\beta, \lambda) = -\sum_k \frac{\partial \log f_k}{\partial \beta} = \sum_{E \neq 0} E \mathcal{N}(E)~\frac{e^{-\beta E + i \lambda}}{1 - e^{-\beta E + i \lambda}}.$$In the canonical ensemble the total number of particles is fixed. While computationally more involved than the grand canonical ensemble, it is the preferred framework for computational simulations.
class CanonicalBosons:
def __init__(self, beta, N, Emax, n_points=1000, verbose=True):
self.beta = beta
self.N = N
self.Emax = Emax
self.n_points = n_points
self.verbose = verbose
self.lams = None
self.dlam = None
self.Z = None
self._initialize()
def _initialize(self):
self.lams = np.linspace(-np.pi, np.pi, self.n_points)
self.dlam = (self.lams[1] - self.lams[0]) / (2.0 * np.pi)
self.Z = np.real(self.z_part())
self.N0 = np.real(self.mean_num0())
self.Emean = np.real(self.mean_energy())
if self.verbose:
self.report()
@staticmethod
def _e(x):
return np.exp(1.0j * x)
@staticmethod
def N_states(x):
return (x + 1.0) * (x + 2.0) / 2.0
def f(self, lam, E):
a = self._e(lam)
b = self._e(lam * (self.N + 1))
if E == 0:
return (1.0 - b) / (1.0 - a)
elif E > 0:
c = np.exp(-self.beta * E)
return 1.0 / (1.0 - c * a)
def prod_fs(self, lam):
prod = 1.0
for E in range(self.Emax + 1):
prod *= self.f(lam, E) ** self.N_states(E)
return prod
def _calc_mean(self, fn):
return self.dlam * np.sum([self._e(- lam * self.N) * fn(lam) * self.prod_fs(lam) for lam in self.lams])
def z_part(self):
return self._calc_mean(lambda x: 1.0)
def num0(self, lam):
a = self._e(-lam)
b = self._e(-lam * (self.N + 1))
return (1.0 / (a - 1.0)) - ((self.N + 1) / (b - 1.0))
def mean_num0(self):
return self._calc_mean(lambda x: self.num0(x)) / self.Z
def energy(self, lam):
a = self._e(-lam)
eng = 0
for E in range(1, self.Emax + 1):
eng += self.N_states(E) * E / (a * np.exp(self.beta * E) - 1.0)
return eng
def mean_energy(self):
return self._calc_mean(lambda x: self.energy(x)) / self.Z
def report(self):
print(f'Temperature: {1.0 / self.beta:.7f},'
f'\nNumber of bosons: {self.N},'
f'\nSingle boson energy cutoff: {self.Emax},'
f'\nPartition function: {self.Z:.7f},'
f'\nGround state occupation per particle: {self.N0 / self.N:.7f},'
f'\nAverage energy per particle: {self.Emean / self.N:.7f}')
def __str__(self):
return f"<{type(self).__name__}: T={1.0/self.beta:.4f}, number of bosons N={self.N}, maximal energy Emax={self.Emax}>"
@njit
def canonical_bosons(beta, N, Emax, n_points=1000):
Z, Eavg, N0 = 0.0j, 0.0j, 0.0j  # complex accumulators (helps numba type inference)
lams = np.linspace(-np.pi, np.pi, n_points)
dlam = (lams[1] - lams[0])/(2 * np.pi)
for lam in lams:
a = np.exp(1.0j * lam)
b = a ** (N + 1)
f0 = (1.0 - b) / (1.0 - a)
n0 = (1.0 / (1.0/a - 1.0)) - ((N + 1) / (1.0/b - 1.0))
Zlam = f0
Elam = 0.0
for E in range(1, Emax+1):
NE = (E+1)*(E+2)/2
c = np.exp(-beta * E)
fE = 1.0 / (1.0 - c * a)
Zlam *= fE ** NE
Elam += NE * E * c * a / (1.0 - c * a)
Z += dlam * Zlam * (a ** (-N))
Eavg += dlam * Zlam * Elam * (a ** (-N))
N0 += dlam * Zlam * n0 * (a ** (-N))
Eavg /= Z
N0 /= Z
return np.real(N0)/N, np.real(Eavg)/N, np.real(Z)
def occup_num(beta, N, Emax):
cb = CanonicalBosons(beta, N, Emax, verbose=False)
return cb.N0/N, cb.Emean/N
%%time
Emax = 4
N = 5
cond_frac2 = []
av_eng_pp = []
Ts = np.linspace(0.1, 1.8, 20)
for T in Ts:
n, e = occup_num(1.0/T, N, Emax)
cond_frac2.append(n)
av_eng_pp.append(e)
%%time
Emax = 4
N = 5
cond_frac3 = []
av_eng_pp3 = []
Ts = np.linspace(0.1, 1.8, 20)
for T in Ts:
n, e, z = canonical_bosons(beta=1.0/T, N=N, Emax=Emax)
cond_frac3.append(n)
av_eng_pp3.append(e)
plt.rcParams['figure.figsize']=(8, 6) # set the figure size
ts = Ts/float(N)**(1/3.0)
plt.plot(temperature, cond_frac, 'b-x', lw=0.8, markersize=6, label='direct approach')
plt.plot(ts, cond_frac2, 'r-o', lw=0.5, markersize=4, label='analytic approach class')
plt.plot(ts, cond_frac3, 'g-.', lw=0.5, markersize=4, label='analytic approach func')
plt.title(f"Condensate fraction of $N$={N} bosons\n bounded in trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
plt.xlabel('$T/N^{1/3}$', fontsize = 20)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
plt.xlim(0, 1.2)
plt.ylim(0.1, 1.05)
plt.legend(loc="upper right")
plt.show()
%%time
Emax = 4
Tmax = 10
NumT= 50
Ts = np.linspace(0.1, Tmax, NumT)
Ns = [5, 25, 50, 100]
cond_frac_dict = {}
av_eng_pp_dict = {}
for N in Ns:
    cond_frac_dict[N] = []
    av_eng_pp_dict[N] = []
    Ts = np.linspace(0.1, Tmax, NumT)
    for T in Ts:
        n, e, z = canonical_bosons(1.0/T, N, Emax)
        cond_frac_dict[N].append(n)
        av_eng_pp_dict[N].append(e)
    print("N =", N)
fig = plt.figure(figsize=(14, 6))
ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)
for N in Ns:
    ts = Ts /float(N)**(1/3.0)
    ax1.plot(ts, cond_frac_dict[N], '-o', lw=0.5, label=f'$N=${N}')
    ax2.plot(ts, av_eng_pp_dict[N], '-o', lw=0.5, label=f'$N=${N}')
ax1.hlines(0.5, 0, 2.2, colors='k', lw=2.0, label='half line')
ax1.set_title(f"Condensate fraction of $N$ bosons\n bounded in a trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
ax1.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax1.set_ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
ax1.set_xlim(0, 2.2)
ax1.set_ylim(0, 1.05)
ax1.legend(loc="best")
ax2.set_title(f"Mean energy per particle for $N$ bosons\n bounded in a trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
ax2.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax2.set_ylabel('$\\langle E \\rangle$ / N', fontsize = 20)
ax2.set_xlim(0, 2.2)
ax2.set_ylim(0, 3.05)
ax2.legend(loc="best")
plt.show()
%%time
Emax = 16
Tmax = 10
NumT= 50
Ts = np.linspace(0.1, Tmax, NumT)
Ns = [5, 25, 50, 100]
cond_frac_dict = {}
av_eng_pp_dict = {}
for N in Ns:
    cond_frac_dict[N] = []
    av_eng_pp_dict[N] = []
    for T in Ts:
        n, e, z = canonical_bosons(1.0/T, N, Emax)
        cond_frac_dict[N].append(n)
        av_eng_pp_dict[N].append(e)
    print("N =", N)
fig = plt.figure(figsize=(14, 6))
ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)
for N in Ns:
    ts = Ts /float(N)**(1/3.0)
    ax1.plot(ts, cond_frac_dict[N], '-o', lw=0.5, label=f'$N=${N}')
    ax2.plot(ts, av_eng_pp_dict[N], '-o', lw=0.5, label=f'$N=${N}')
ax1.hlines(0.5, 0, 2.2, colors='k', lw=2.0, label='half line')
ax1.set_title(f"Condensate fraction of $N$ bosons\n bounded in a trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
ax1.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax1.set_ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
ax1.set_xlim(0, 1.2)
ax1.set_ylim(0, 1.05)
ax1.legend(loc="best")
ax2.set_title(f"Mean energy per particle for $N$ bosons\n bounded in a trap for" + " ($E_{max}$=" + f"{Emax})", fontsize = 22)
ax2.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax2.set_ylabel('$\\langle E \\rangle$ / N', fontsize = 20)
ax2.set_xlim(0, 1.2)
ax2.set_ylim(0, 9.05)
ax2.legend(loc="best")
plt.show()
%%time
N = 128
Tmax = 20
NumT= 100
Ts = np.linspace(0.1, Tmax, NumT)
Emaxs = [4, 8, 16, 24, 32, 64]
cond_frac_dict = {}
av_eng_pp_dict = {}
for Emax in Emaxs:
    cond_frac_dict[Emax] = []
    av_eng_pp_dict[Emax] = []
    for T in Ts:
        n, e, z = canonical_bosons(1.0/T, N, Emax)
        cond_frac_dict[Emax].append(n)
        av_eng_pp_dict[Emax].append(e)
    print("Emax =", Emax)
fig = plt.figure(figsize=(14, 6))
ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)
for Emax in Emaxs:
    ts = Ts /float(N)**(1/3.0)
    cf = np.array(cond_frac_dict[Emax])
    ae = np.array(av_eng_pp_dict[Emax])
    cf[np.isnan(cf)] = 0
    ae[np.isnan(ae)] = 0
    ax1.plot(ts, cf, '-o', lw=0.5, label=f'$E_{{max}}=${Emax}')
    ax2.plot(ts, ae, '-o', lw=0.5, label=f'$E_{{max}}=${Emax}')
ax1.hlines(0.5, 0, 2.2, colors='k', lw=2.0, label='half line')
ax1.set_title(f"Condensate fraction of $N$={N} bosons\n bounded in a trap for various" + " $E_{max}$", fontsize = 22)
ax1.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax1.set_ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
ax1.set_xlim(0, 1.01)
ax1.set_ylim(0, 1.05)
ax1.legend(loc="best")
ax2.set_title(f"Mean energy per particle for $N$={N} bosons\n bounded in a trap for various" + " $E_{max}$", fontsize = 22)
ax2.set_xlabel('$T/N^{1/3}$', fontsize = 20)
ax2.set_ylabel('$\\langle E \\rangle$ / N', fontsize = 20)
ax2.set_xlim(0, 1.01)
ax2.set_ylim(0, 13.05)
ax2.legend(loc="best")
plt.show()
from scipy.optimize import fsolve
from functools import partial
def num(T, N, Emax):
    n, _, _ = canonical_bosons(1.0/T, N, Emax)
    return n - 0.5
# redefined with @jit (object-mode fallback): fsolve below calls this through num()
# with a length-1 array argument, which the @njit version would typically reject
@jit
def canonical_bosons(beta, N, Emax, n_points=1000):
    Z, Eavg, N0 = 0, 0, 0
    lams = np.linspace(-np.pi, np.pi, n_points)
    dlam = (lams[1] - lams[0])/(2 * np.pi)
    for lam in lams:
        a = np.exp(1.0j * lam)
        b = a ** (N + 1)
        f0 = (1.0 - b) / (1.0 - a)
        n0 = (1.0 / (1.0/a - 1.0)) - ((N + 1) / (1.0/b - 1.0))
        Zlam = f0
        Elam = 0.0
        for E in range(1, Emax+1):
            NE = (E+1)*(E+2)/2
            c = np.exp(-beta * E)
            fE = 1.0 / (1.0 - c * a)
            Zlam *= fE ** NE
            Elam += NE * E * c * a / (1.0 - c * a)
        Z += dlam * Zlam * (a ** (-N))
        Eavg += dlam * Zlam * Elam * (a ** (-N))
        N0 += dlam * Zlam * n0 * (a ** (-N))
    Eavg /= Z
    N0 /= Z
    return np.real(N0)/N, np.real(Eavg)/N, np.real(Z)
%%time
Emaxs = [2, 4, 8, 16, 20, 32]
Ns = range(10, 100, 2)
T_crit_dict = {}
E_avg_dict = {}
for Emax in Emaxs:
    T_crit_dict[Emax] = []
    E_avg_dict[Emax] = []
    for N in Ns:
        f = partial(num, N=N, Emax=Emax)
        starting_guess = np.array([1.0])
        vals = fsolve(f, starting_guess)
        n1, e1, _ = canonical_bosons(1.0/vals[0], N, Emax)
        T_crit_dict[Emax].append(vals[0])
        E_avg_dict[Emax].append(e1)
    print("Emax =", Emax)
fig = plt.figure(figsize=(14, 6))
ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)
for Emax in Emaxs:
    ax1.plot(Ns, T_crit_dict[Emax], '-o', lw=0.5, label=f'$E_{{max}}=${Emax}')
    ax2.plot(Ns, E_avg_dict[Emax], '-o', lw=0.5, label=f'$E_{{max}}=${Emax}')
ax1.set_title("$T_{crit}$ vs N for various $E_{max}$", fontsize = 22)
ax1.set_xlabel('$N$', fontsize = 20)
ax1.set_ylabel('$T_{crit}$', fontsize = 20)
ax1.legend(loc="best")
ax2.set_title("$\\langle E_{crit} \\rangle$ / N vs N for various $E_{max}$", fontsize = 22)
ax2.set_xlabel('$N$', fontsize = 20)
ax2.set_ylabel('$\\langle E_{crit} \\rangle$ / N', fontsize = 20)
ax2.legend(loc="best")
ax2.set_ylim(0, 3.5)
plt.show()
from scipy.optimize import curve_fit
def fit_curve(x, a, b):
    return a * (x ** b)
fig = plt.figure(figsize=(14, 6))
ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)
Ns_arr = np.asarray(Ns)  # fit_curve needs an array, not a range
for Emax in Emaxs[:-1]:
    popt, pcov = curve_fit(fit_curve, Ns_arr, np.array(T_crit_dict[Emax]))
    poptE, pcovE = curve_fit(fit_curve, Ns_arr, np.array(E_avg_dict[Emax]))
    ax1.plot(Ns_arr, np.array(T_crit_dict[Emax]), 'o', lw=1.0, markersize=2, label=f'$E_{{max}}=${Emax}')
    ax2.plot(Ns_arr, np.array(E_avg_dict[Emax]), 'o', lw=1.0, markersize=2, label=f'$E_{{max}}=${Emax}')
    ax1.plot(Ns_arr, fit_curve(Ns_arr, *popt), lw=1.0, label='fit: a=%5.3f, b=%5.3f' % tuple(popt))
    ax2.plot(Ns_arr, fit_curve(Ns_arr, *poptE), lw=1.0, label='fit: a=%5.3f, b=%5.3f' % tuple(poptE))
ax1.set_title("$T_{crit}$ vs N for various $E_{max}$", fontsize = 22)
ax1.set_xlabel('$N$', fontsize = 20)
ax1.set_ylabel('$T_{crit}$', fontsize = 20)
ax1.legend(loc="best")
ax1.set_ylim(1, 9.5)
ax2.set_title("$\\langle E_{crit} \\rangle$ / N vs N for various $E_{max}$", fontsize = 22)
ax2.set_xlabel('$N$', fontsize = 20)
ax2.set_ylabel('$\\langle E_{crit} \\rangle$ / N', fontsize = 20)
ax2.legend(loc="best")
ax2.set_ylim(0.5, 3.0)
plt.show()
As we can see, $T_{crit}$ and $E_{crit}/N$ scale approximately as $N^{1/3}$ for large values of $E_{max}$ and $N$. Later, we will show that $T_{crit} \sim N^{1/3}$ in the large-$N$ limit, which justifies scaling the temperature by this factor.
We will consider the $N \to \infty$ limit, in which the integrals of the canonical ensemble can be evaluated by the saddle point approximation. It turns out that the saddle point in $\lambda$ corresponds to the chemical potential (the energy cost of introducing an additional particle into the system).
We want to integrate the following, in the limit $N \to \infty$, using saddle point approximation,
$$Z_N(\beta) = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}Z_N(\beta, \lambda) = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\Pi_{j=0}^{j_\text{max}}\left(\sum_{n_j=0}^Ne^{n_j(-\beta E_j + i \lambda)}\right) = \int_{-\pi}^\pi\frac{\text{d}\lambda}{2\pi}e^{-iN\lambda}\prod_{E=0}^{E_\text{max}}[f_E(\beta,\lambda)]^{\mathcal{N}(E)}. $$Let's find the saddle points of the log of the integrand through the equation:
$$ -i\frac{\partial}{\partial \lambda} \left[-iN\lambda - \sum_{E=0}^{E_\text{max}}\mathcal{N}(E) \log\left(1 - e^{-\beta E + i \lambda}\right) \right] = 0 $$$$ \implies N = \sum_{E=0}^{E_\text{max}}\mathcal{N}(E) \frac{e^{-\beta E + i \lambda}}{1 - e^{-\beta E + i \lambda}} $$Writing $\lambda = \operatorname{Re}\lambda - i \beta \mu$, where $\mu$ will be shown to play the role of the chemical potential, the partition function at the saddle point (where $\operatorname{Re}\lambda = 0$) becomes:
$$Z_N(\beta, \mu) = \prod_{\text{single particle states} \ \sigma} \frac{1}{1 - e^{-\beta(E_{\sigma} - \mu)}} $$This is, at the same time, the partition function of a system of independent states $\sigma$ with shifted energies $E_{\sigma} - \mu$ and a fluctuating total particle number: the grand canonical ensemble. In other words, the saddle point of the canonical partition function defines the partition function of the grand canonical ensemble.
In the grand canonical ensemble, the probability of having $k$ particles in state $\sigma$ is:
$$\pi(N_{\sigma} = k) = \frac{e^{-\beta(E_{\sigma} - \mu)k}}{\sum_{N_{\sigma}=0}^{\infty}e^{-\beta(E_{\sigma} - \mu)N_{\sigma}}},$$therefore, the mean number of particles in state $\sigma$ is:
$$\langle N_{\sigma} \rangle = \frac{e^{-\beta(E_{\sigma} - \mu)}}{1 - e^{-\beta(E_{\sigma} - \mu)}},$$For the ground state, $E_0=0$ and $k=N_0$, we will have:
$$\pi(N_0) = e^{\beta\mu N_0} - e^{\beta\mu (N_0+1)}, \ \ \implies \ \ \langle N_{0} \rangle = \frac{e^{\beta \mu}}{1 - e^{\beta\mu}}.$$The mean total number of particles in the harmonic trap, in the grand canonical ensemble, is:
$$\langle N(\mu) \rangle = \sum_{E} \mathcal{N}(E)\frac{e^{-\beta(E - \mu)}}{1 - e^{-\beta(E - \mu)}}.$$The chemical potential locates the saddle point of the canonical partition function for $N$ particles; it is also the point at which the grand canonical system satisfies $\langle N \rangle = N$.
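The geometric occupation distribution above can be checked numerically; a small sketch (the values of $\beta$, $E_\sigma$ and $\mu$ are arbitrary choices made here):

```python
import numpy as np

beta, E, mu = 1.0, 2.0, -0.5            # arbitrary test values, mu < 0
x = np.exp(-beta * (E - mu))
k = np.arange(2000)                     # truncation; the terms decay geometrically
p = (1.0 - x) * x**k                    # normalized probabilities pi(N_sigma = k)
mean_numeric = np.sum(k * p)
mean_closed = x / (1.0 - x)             # e^{-beta(E-mu)} / (1 - e^{-beta(E-mu)})
assert abs(mean_numeric - mean_closed) < 1e-12
```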
To summarize: $Z_N(\beta, \lambda)$, when integrated over $\lambda$, gives the exact canonical partition function for $N$ particles. In the complex $\lambda$-plane, this function has a saddle point defined by a relation between $N$ and the chemical potential $\mu$. In the large-$N$ limit, this point dominates the integral for $Z_N$, and the saddle point of the canonical ensemble gives the partition function of the grand canonical ensemble.
@njit
def ZN(z, beta=1.0, N=5, Emax=1000):
    prod = np.exp(-1.0j * N * z)
    for E in range(0, Emax + 1):
        NE = (E+1)*(E+2)/2.0  # degeneracy for 3D
        a = np.exp(-beta * E + 1.0j * z)
        prod /= (1.0 - a) ** NE
    return prod
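Before looking at $Z_N(\beta,\lambda)$ in the complex plane, we can test the number-projection integral itself: for a small system it must agree with a direct sum over all $N$-boson occupations. A self-contained sketch (independent of the functions above; $\beta=1$, $N=5$, $E_{max}=4$ are choices made here):

```python
import numpy as np
from itertools import combinations_with_replacement
from math import exp

beta, N, Emax = 1.0, 5, 4

# All single-particle states: energy E appears with degeneracy (E+1)(E+2)/2
levels = [E for E in range(Emax + 1) for _ in range((E + 1) * (E + 2) // 2)]

# 1) Direct sum over all multisets of N single-particle states
Z_direct = sum(exp(-beta * sum(c)) for c in combinations_with_replacement(levels, N))

# 2) Number-projection integral over lambda (midpoint rule, periodic integrand)
n_pts = 4096
lams = -np.pi + (np.arange(n_pts) + 0.5) * 2 * np.pi / n_pts
Z_int = 0.0
for lam in lams:
    a = np.exp(1.0j * lam)
    Zlam = (1.0 - a**(N + 1)) / (1.0 - a)      # ground level, occupation capped at N
    for E in range(1, Emax + 1):
        NE = (E + 1) * (E + 2) // 2
        Zlam *= (1.0 / (1.0 - exp(-beta * E) * a))**NE
    Z_int += (Zlam * a**(-N)).real / n_pts

assert abs(Z_direct - Z_int) / Z_direct < 1e-6
```

The midpoint grid deliberately avoids $\lambda = 0$, where the truncated ground-state factor would be an indeterminate $0/0$.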
%matplotlib notebook
xa, xb = -np.pi, np.pi
ya, yb = -1.0, 1.0
n = 60
x, y = np.mgrid[xa:xb:(1.0j * n), ya:yb:(1.0j * n)]
z = x + 1.0j * y
w = np.abs(ZN(z))
w[w > 1e3] = 0
fig = plt.figure(figsize=(12,8))
ax_real = fig.add_subplot(1,1,1, projection='3d')
ax_real.plot_surface(z.real, z.imag, w.real,
rstride=1, cstride=1, cmap=cm.jet)
ax_real.plot_wireframe(z.real, z.imag, w.real, rstride=2, cstride=2, alpha=0.4, lw=0.5)
ax_real.set_xlabel("Re($\lambda$)", fontsize=20, labelpad=20)
ax_real.set_ylabel("Im($\lambda$)", fontsize=20, labelpad=20)
ax_real.set_zlabel("$Z_N(\\beta, \lambda)$", fontsize=20, labelpad=20)
ax_real.set_title("Partition function $Z_{N=5}(\\beta=1, \lambda)$ in a complex plane", fontsize=25)
# # plot the imaginary part
# ax_imag = fig.add_subplot(1,2,2,projection='3d')
# ax_imag.plot_surface(z.real, z.imag, w.imag,
# rstride=1, cstride=1, cmap=cm.jet)
plt.show()
%matplotlib notebook
branching_number = 3
Nr = 25
Ntheta = 360
# compute the theta,R domain
theta = np.linspace(0, 2*np.pi * branching_number, Ntheta)
r = np.linspace(0, np.pi, Nr)
Theta, R = np.meshgrid(theta, r)
z = R*np.exp(1.0j*Theta)
# plot a multivalued function over several sheets: the angle Theta is unwound over
# `branching_number` turns and passed directly into the formula,
# e.g. w = sqrt(z) via np.sqrt(R) * np.exp(1j*Theta/2); here we plot w = log(z)
w = np.log(R) + 1j * Theta
fig = plt.figure(figsize=(12,6))
# plot the real part
ax_real = fig.add_subplot(1,2,1,projection='3d')
ax_real.plot_surface(z.real, z.imag, w.real,
rstride=1, cstride=1, cmap=cm.jet, alpha=0.5)
ax_real.plot_wireframe(z.real, z.imag, w.real, rstride=2, cstride=2, alpha=0.2, lw=0.5)
ax_real.set_xlabel("Re(z)", fontsize=20, labelpad=20)
ax_real.set_ylabel("Im(z)", fontsize=20, labelpad=20)
ax_real.set_zlabel("$Re(f(z))$", fontsize=20, labelpad=20)
# plot the imaginary part
ax_imag = fig.add_subplot(1,2,2,projection='3d')
ax_imag.plot_surface(z.real, z.imag, w.imag,
rstride=1, cstride=1, cmap=cm.jet, alpha=0.5)
ax_imag.plot_wireframe(z.real, z.imag, w.imag, rstride=2, cstride=2, alpha=0.2, lw=0.5)
ax_imag.set_xlabel("Re(z)", fontsize=20, labelpad=20)
ax_imag.set_ylabel("Im(z)", fontsize=20, labelpad=20)
ax_imag.set_zlabel("$Im(f(z))$", fontsize=20, labelpad=20)
plt.show()
@jit  # object-mode fallback: fsolve passes mu as a length-1 array
def f1(mu, targetN, beta, Emax):
    Nm = 0
    bmu = np.exp(-beta * mu)
    for E in range(Emax + 1):
        NE = (E+1)*(E+2)/2
        bE = np.exp(beta * E)
        Nm += NE / (bmu * bE - 1.0)
    return Nm - targetN
# plain Python here: fsolve and functools.partial cannot be compiled by numba
def get_mu(N, beta, func=f1, Emax=9000):
    f = partial(func, targetN=N, beta=beta, Emax=Emax)
    starting_guess = np.array([-1e-4])
    result = fsolve(f, x0=starting_guess, xtol=1e-18, maxfev=5000)
    return result[0]
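fsolve with a hand-tuned starting guess can wander into the forbidden region $\mu > 0$, where the Bose occupations diverge. Since $\langle N(\mu) \rangle$ increases monotonically on $\mu < 0$, a bracketed root finder is a more robust alternative; a sketch (the function names and the bracket $[-50, -10^{-12}]$ are choices made here, not part of the notebook above):

```python
import numpy as np
from scipy.optimize import brentq

def n_total(mu, beta, Emax=2000):
    """Mean particle number in the trap at chemical potential mu < 0."""
    E = np.arange(Emax + 1)
    NE = (E + 1) * (E + 2) / 2.0
    x = np.minimum(beta * (E - mu), 700.0)   # guard against overflow in exp
    return np.sum(NE / (np.exp(x) - 1.0))

def get_mu_bracketed(N, beta, Emax=2000):
    # <N(mu)> grows monotonically toward infinity as mu -> 0-, so the root is
    # safely bracketed and brentq is guaranteed to converge
    return brentq(lambda mu: n_total(mu, beta, Emax) - N, -50.0, -1e-12)
```

For example, `n_total(get_mu_bracketed(100, 0.1), 0.1)` returns a value very close to 100.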
beta = 1/10.0
# assertion values to pick the correct initial guess
# Ns = [400, 800, 1000, 1200, 1400, 2000, 5000, 10000]
# mus = [-11.173318075200484, -4.828835689027581, -2.9284713153883755, -1.4806765761017484, -0.4293010203120327, -0.018694934432555214, -0.002831916100988416, -0.001172224854500701]
Ns = np.arange(100, 1000, 50)
mus = []
for n in Ns:
    mu = get_mu(n, beta)
    mus.append(mu)
print(mus)
%matplotlib inline
fig = plt.figure(figsize=(10,8))
plt.plot(Ns, mus)
plt.xlabel("$\\langle N \\rangle$", fontsize=22)
plt.ylabel("$\mu$", fontsize=22)
plt.title("Chemical potential vs average number of trapped bosons", fontsize=22)
plt.show()
@njit
def mean_n0(mu, beta):
    bmu = np.exp(-beta * mu)
    return 1.0/(bmu - 1.0)
Tmax = 10
NumT= 20
Ts = np.linspace(0.1, Tmax, NumT)
Ns = [10**p for p in range(1, 5)][::-1]
cf_n0_dict = {}
for N in Ns:
    cf_n0_dict[N] = []
    for T in Ts:
        beta = 1.0/T
        mu = get_mu(N, beta)
        mn0 = mean_n0(mu, beta)
        cf_n0_dict[N].append(mn0/N)
fig = plt.figure(figsize=(10,8))
for N in Ns:
    cf_n0s = cf_n0_dict[N]
    ts = Ts /float(N)**(1/3.0)
    plt.plot(ts, cf_n0s, '-o', markersize=5, lw=0.7, label=f"N={N}")
plt.xlabel('$T/N^{1/3}$', fontsize = 20)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
plt.title("Condensate fraction vs scaled temperature", fontsize=22)
plt.legend(loc="upper right")
plt.xlim(0.1, 1.1)
plt.show()
Tmax = 10
NumT= 20
Ts = np.linspace(0.1, Tmax, NumT)
N = 100
Emax = 200
grand_cond_frac = []
can_cond_frac = []
for T in Ts:
    beta = 1.0/T
    mu = get_mu(N, beta)
    mn0 = mean_n0(mu, beta)
    grand_cond_frac.append(mn0/N)
    n, _, _ = canonical_bosons(beta, N, Emax)
    can_cond_frac.append(n)
cf = np.array(can_cond_frac)
cf[np.isnan(cf)] = 0
fig = plt.figure(figsize=(10,8))
ts = Ts /float(N)**(1/3.0)
plt.plot(ts, grand_cond_frac, '-x', markersize=5, lw=0.7, label=f"grand canonical")
plt.plot(ts, cf, '-o', lw=0.7, markersize=5, label=f'canonical')
plt.xlabel('$T/N^{1/3}$', fontsize = 20)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
plt.title(f"Condensate fraction vs scaled temperature ($N=${N})", fontsize=22)
plt.legend(loc="upper right")
plt.xlim(0.1, 1.0)
plt.show()
Note that at low $T$ the chemical potential is negative and close to zero. We can verify that the occupation of the first excited state changes very little near $\mu = 0$, where the total number of particles diverges, while the ground-state occupation changes drastically as $\mu \to 0$. This is why, for all excited states, we may replace the actual occupation numbers by the values they would have at $\mu = 0$.
The saturation number is the maximum mean number of particles that can be put into the excited states:
$$\langle N_{sat} \rangle = \sum_{E \geq 1} \mathcal{N}(E)\frac{e^{-\beta E}}{1 - e^{-\beta E}} = \sum_{E \geq 1} \langle N_{sat}(E) \rangle .$$As $N$ increases (and $\mu \to 0$), the true occupation numbers of the excited states approach the saturation numbers $\langle N_{sat}(E) \rangle$. Imagine a trap being filled with particles: initially, particles go into the excited states until these are all saturated (at $T_c$); after saturation, any additional particles have no choice but to populate the ground state and contribute to the Bose–Einstein condensate.
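The saturation number, and its $\zeta(3)/\beta^3$ asymptotics derived next, can be checked numerically; a sketch (the cutoff `Emax=4000` is an assumption chosen so the sum has converged for the temperatures used):

```python
import numpy as np

def n_sat(beta, Emax=4000):
    """Saturation number: total mean occupation of the excited states at mu = 0."""
    E = np.arange(1, Emax + 1)
    NE = (E + 1) * (E + 2) / 2.0
    x = np.minimum(beta * E, 700.0)          # overflow guard for large beta*E
    return np.sum(NE * np.exp(-x) / (1.0 - np.exp(-x)))

zeta3 = 1.2020569
for beta in [0.05, 0.02, 0.01]:
    # the ratio approaches 1 from above as beta -> 0
    print(beta, n_sat(beta) * beta**3 / zeta3)
```

The subleading $3E/2 + 1$ pieces of the degeneracy give a positive correction of relative order $\beta$, which is why the ratio converges slowly from above.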
At small $\beta$ (high $T$), we can replace the sum for the saturation number by an integral (substituting $x = \beta E$):
$$\langle N_{sat} \rangle = \sum_{E \geq 1} \frac{(E+1)(E+2)}{2} \frac{e^{-\beta E}}{1 - e^{-\beta E}} \ \xrightarrow{\beta \to 0} \ \frac{1}{\beta^3}\int^{\infty}_0 d x ~\frac{x^2}{2} \frac{1}{e^x - 1} = \frac{\zeta(3)}{\beta^3} \approx \frac{1.202}{\beta^3}.$$The saturation number in the limit $N \to \infty$ determines the critical temperature of the ideal Bose gas: at the critical temperature, the saturation number equals the particle number
$$\langle N \rangle = \langle N_{sat} \rangle, \ \ \ T=T_c $$$$\langle N \rangle = \zeta(3)T^3_c \ \ \implies \ \ T_c = \frac{\langle N \rangle^{1/3}}{\zeta(3)^{1/3}} \approx 0.94\langle N \rangle^{1/3}$$Below $T_c$, the difference between the particle number and the saturation number must come from particles in the ground state, so that
$$\langle N_0 \rangle = \langle N \rangle - \langle N_{sat} \rangle \ \ \implies \ \ \frac{\langle N_0 \rangle}{\langle N \rangle} = 1 - \frac{T^3}{T_c^3} $$
@njit
def mean_nk(mu, beta, k):
    # mean occupation of a single-particle level with energy k
    bmu = np.exp(-beta * (mu - k))
    return 1.0/(bmu - 1.0)
T = 10
Ns= [2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000]
mean_N1s = []
mean_N0s = []
mus = []
for N in Ns:
    mu = get_mu(N, 1/T)
    mN1 = mean_nk(mu, 1/T, 1)
    mN0 = mean_nk(mu, 1/T, 0)
    mean_N1s.append(mN1)
    mean_N0s.append(mN0)
    mus.append(mu)
plt.plot(mus, mean_N1s, 'r-x', lw=0.6)
plt.xlabel("$\mu$")
plt.ylabel("$\\langle N_{\\sigma=1} \\rangle$")
plt.ylim(9.3, 9.51)
plt.show()
plt.plot(mus, mean_N0s, 'b-o', lw=0.6)
plt.xlabel("$\mu$")
plt.ylabel("$\\langle N_0 \\rangle$")
plt.show()
N = 100
Tmax = 10
NumT= 20
Ts = np.linspace(0.1, Tmax, NumT)
ts = Ts /float(N)**(1/3.0)
cf_theory = 1 - (ts / 0.94) ** 3
#############################################
fig = plt.figure(figsize=(10,8))
plt.plot(ts, grand_cond_frac, '-x', markersize=5, lw=0.7, label=f"grand canonical")
plt.plot(ts, cf, '-o', lw=0.7, markersize=5, label=f'canonical')
plt.plot(ts, cf_theory, '--', label="$1 - T^3/T^3_c$")
plt.xlabel('$T/N^{1/3}$', fontsize = 20)
plt.ylabel('$\\langle N_0 \\rangle$ / N', fontsize = 20)
plt.title(f"Condensate fraction vs scaled temperature ($N=${N})", fontsize=22)
plt.legend(loc="upper right")
plt.xlim(0.1, 1.0)
plt.ylim(0.0, 1.1)
plt.show()
The Hamiltonian of the EM field in the second quantization picture can be represented as a sum of Hamiltonians of independent oscillators with frequencies $\omega(\vec{k}) = c |\vec{k}|$ and polarizations $\vec{\epsilon}$.
$$H_{EM} = \sum_{\mathbf{k}, \mathbf{\epsilon}} \hbar \omega(\mathbf{k}) \left(a^{\dagger}_{\mathbf{k},\mathbf{\epsilon}}a_{\mathbf{k},\mathbf{\epsilon}} + \frac{1}{2}\right) $$Consider EM radiation in a cubic box of size $L$ with conducting walls. The field modes (oscillators) will be standing waves described by the pairs $(\mathbf{k},\mathbf{\epsilon}) \sim (-\mathbf{k},\mathbf{\epsilon})$. For standing waves we need to have $2 L = n_i\lambda_i = 2\pi n_i/k_i$ or $\mathbf{k} = \pi \mathbf{n}/L$, where $\mathbf{n}$ is a vector of positive integers. Therefore, the number of modes with angular frequencies in the interval $(\omega, \omega + d\omega)$ is
$$d N(\omega) \approx 2 \times \frac{1}{8} \times 4\pi n^2 d n = \frac{V}{\pi^2 c^3}\omega^2 d\omega $$The radiation that consists of $N$ modes of frequency $\omega$ can be described by the following Fock state:
$$|n_1, n_2, \dots, n_N \rangle_{\omega} = \prod^{N}_{j=1} \frac{\left(a^{\dagger}_j\right)^{n_j}}{\sqrt{n_j!}}|0, 0, \dots, 0 \rangle ,$$where the $j$-th mode has a distinct $(\mathbf{k}_j,\mathbf{\epsilon}_j)$ but the same energy $\hbar \omega$, and $n_j$ is the number of photons in mode $j$, each with energy $\hbar \omega$ (not one photon with energy $n_j \hbar \omega$). In this system of $N$ oscillator modes, the total number of photons, $M$, and the total energy, $E$, are:
$$ M = \sum_j n_j \ , \ \ \ E = \hbar \omega \left(\frac{N}{2} + M\right) $$The grand canonical partition function for oscillator modes with frequency $\omega$ is:
$$Z_G = \prod_{N=0}^{\infty}\sum_{\{n_j\}} {}_{\omega}\langle n_1, n_2, \dots, n_N | e^{- \beta\hat{H} + \beta \mu \hat{N} } |n_1, n_2, \dots, n_N \rangle_{\omega} = \prod_{N=0}^{\infty}\sum_{\{n_j\}} {}_{\omega}\langle n_1, n_2, \dots, n_N | e^{- \beta \sum_j \left[\hbar \omega\left(\frac{1}{2} + n_j\right) - \mu n_j \right] } |n_1, n_2, \dots, n_N \rangle_{\omega} \\= \prod_{N=0}^{\infty}e^{-\beta N\hbar\omega/2}\left[\sum_{n_j=0}^{M} \langle n_j | e^{- \beta \left(\hbar \omega - \mu \right) n_j } |n_j \rangle \right]^N = \prod_{N=0}^{\infty}e^{-\beta N \hbar\omega/2} \left[\frac{1 - e^{- \beta \left(\hbar \omega - \mu\right)(M+1)}}{1 - e^{- \beta \left(\hbar \omega - \mu\right)}}\right]^N $$where $M = E/\hbar \omega - N/2$. Therefore, for a given number of oscillator modes:
$$Z_N = e^{-\beta N \hbar\omega/2} \left[\frac{1 - e^{- \beta \left(\hbar \omega - \mu\right)(M+1)}}{1 - e^{- \beta \left(\hbar \omega - \mu\right)}}\right]^N \approx e^{-\beta N \hbar\omega/2} \left[\frac{1}{1 - e^{- \beta \left(\hbar \omega - \mu\right)}}\right]^N $$The average number of photons and the average energy per mode are:
$$\bar{n}_{\omega} \equiv \frac{\bar{N}}{N} = \frac{1}{e^{\beta(\hbar \omega - \mu)} - 1}$$$$\bar{\epsilon}_{\omega} \equiv \frac{\bar{E}}{N} = \frac{1}{2}\hbar\omega + \frac{\hbar \omega}{e^{\beta(\hbar \omega - \mu)} - 1}$$Therefore, the spectral energy density is:
$$u_{\omega}(T) = \frac{V \hbar\omega^3}{\pi^2 c^3}\left[\frac{1}{2} + \frac{1}{e^{\beta(\hbar \omega - \mu)} - 1} \right]$$Consider a system of $N$ almost independent oscillators with total energy:
$$E = \frac{1}{2}N\hbar \omega + M \hbar \omega, \ \ \ M, N = 0, 1, 2, \dots $$Let the quantum number of the $i$-th oscillator be $n_i$, then:
$$n_1 + n_2 + \dots + n_N = M $$As it is clear from the previous appendix, $N$ is the number of distinct oscillator modes. Note, these are not bosons but multiparticle states, e.g. standing waves. On the other hand $n_i$ is the number of identical bosons in the $i$-th oscillator mode.
From simple combinatoric arguments, it follows that the thermodynamic weight $W_M$ equals the number of ordered partitions (compositions, where order matters) of $M$ into $N$ non-negative integers:
$$W_M = {M+N-1 \choose N-1},$$and the entropy (in units $k_B = 1$) is:
$$S = \log W_M \approx (M+N) \log(M+N) - M\log M - N \log N.$$The temperature of such system is determined from
$$\frac{1}{T} = \frac{\partial S}{\partial E} = \frac{1}{\hbar \omega}\frac{\partial S}{\partial M} = \frac{1}{\hbar \omega}\log \left(\frac{M+N}{M}\right) = \frac{1}{\hbar \omega}\log \left(\frac{E/N + \frac{1}{2}\hbar \omega}{E/N - \frac{1}{2}\hbar \omega}\right).$$As a result:
$$E = N\left[\frac{1}{2}\hbar \omega + \frac{\hbar \omega}{e^{\hbar \omega/T} - 1}\right]$$Let a given oscillator have an energy $\epsilon_n = \left(n+\frac{1}{2}\right)\hbar \omega$; then the rest of the system has energy $E-\epsilon_n = \frac{N-1}{2}\hbar \omega + \left(M - n\right)\hbar \omega$. Therefore, the thermodynamic weight $W'_{M-n}$ of this remaining system is:
$$W'_{M-n} = {M+N-n-2 \choose N-2},$$Therefore, the probability for one of the oscillators to have quantum number $n$ is:
$$P(n) = \frac{W'_{M-n}}{W_M} \approx \frac{M^n N}{(M+N)^{n+1}} = \frac{1}{1 + M/N}\left(\frac{M/N}{1 + M/N}\right)^n = \left(1 - e^{-\beta \hbar \omega}\right) e^{-\beta n \hbar \omega},$$where we used $M/N = 1/\left(e^{\beta \hbar \omega} - 1\right)$ from the previous result. The weight $W_M$ can also be obtained in the language of partition functions. The partition function for a single harmonic oscillator is:
$$Z_1(\beta) = \sum_{n=0}^{\infty} e^{-\beta(n+ 1/2)\hbar \omega} = \frac{e^{-\beta \hbar \omega/2}}{1 - e^{-\beta \hbar \omega}}$$For a system of $N$ (almost independent) oscillators,
$$Z_N(\beta) = Z^N_1(\beta) = \frac{e^{-N\beta \hbar \omega/2}}{\left(1 - e^{-\beta \hbar \omega} \right)^N} = e^{-N\beta \hbar \omega/2} \sum_{M=0}^{\infty} {-N \choose M} (-1)^M e^{-M\beta \hbar \omega} \\= \sum_{M=0}^{\infty} {M+N-1 \choose N-1} e^{-\left(M + N/2 \right)\beta \hbar \omega} = \int e^{-\beta E} \Omega(E) \text{d}E$$$$\Omega(E) = \sum_{M=0}^{\infty} {M+N-1 \choose N-1} \delta\left(E - \left[M + N/2 \right]\hbar \omega\right) $$The latter implies that there are $W(M) = {M+N-1 \choose N-1}$ states for each energy level $E = \left(M + N/2 \right)\hbar \omega$.
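The identification of the series coefficients with $W(M) = {M+N-1 \choose N-1}$ can be verified by brute force for small systems; a minimal sketch:

```python
from math import comb
from itertools import product

# Number of ways to write n_1 + ... + n_N = M with n_i >= 0,
# i.e. the coefficient of q^M in 1/(1-q)^N
N = 3
for M in range(7):
    brute = sum(1 for ns in product(range(M + 1), repeat=N) if sum(ns) == M)
    assert brute == comb(M + N - 1, N - 1)
```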
From a mathematical point of view, as a problem of partitioning an integer, the generating function is:
$$f(x) = \left[\frac{1 - x^{M+1}}{1 - x}\right]^N $$which leads to the above result in the limit of large $M$.
Assume that the oscillators (field modes) are bosons themselves, such that all microstates related by permutations are identical, e.g. the microstates $(0, 2)$ and $(2, 0)$ are taken to be the same. This is possible either in 1+1 space-time dimensions, or when the oscillators do not carry any other distinguishing quantum numbers. The standing waves on a segment are described by harmonics, $n$, such that $k = \pi n/L = \omega/c$. If there are $b$ polarizations, then each energy level has only $b$ degenerate states. In this case $W^{id}_M$ is the number of partitions of $M$ into at most $N$ parts (order doesn't matter): $W^{id}_M = p_{\leq N}(M)$. Now, since
$$Z^{id}_N(\beta) = e^{-Nb\beta \hbar \omega/2} \left(\sum_{M=0}^{\infty} p_{\leq N}(M) e^{-M\beta \hbar \omega}\right)^b = e^{-Nb\beta \hbar \omega/2} \prod_{k=1}^{N} \frac{1}{\left(1 - e^{-k\beta \hbar \omega}\right)^b},$$where we used the generating function for restricted partitions. From this partition function it follows that the energy is:
$$E^{id} = \frac{Nb}{2}\hbar \omega + \sum_{k=1}^{N}\frac{bk\hbar\omega}{e^{k\hbar \omega/T} - 1}$$Notice that the string partition function, ignoring the zero-point energy, is:
$$Z^{\text{string}}(\beta) = \prod_{k=1}^{\infty} \frac{1}{\left(1 - e^{-k\beta \hbar \omega}\right)^b}.$$To summarize, the system of identical oscillators (e.g. in 1+1 dimensions), where the order within a partition is irrelevant, is equivalent to a system of distinct oscillators with fundamental frequencies $k\omega$. The latter is equivalent to a system consisting of a single string. This is not surprising, since standing waves on a 1D segment are very much like the vibration modes of a string.
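The restricted-partition generating function $\prod_{k=1}^{N}\left(1-q^k\right)^{-1}$ can be expanded numerically to recover $p_{\leq N}(M)$; a minimal sketch (the helper name is ours):

```python
import numpy as np

def p_le(N, Mmax):
    """Coefficients of Prod_{k=1..N} 1/(1-q^k): partitions of M into parts of size
    at most N, which by conjugation equals partitions into at most N parts."""
    coeffs = np.zeros(Mmax + 1, dtype=np.int64)
    coeffs[0] = 1
    for k in range(1, N + 1):
        # multiplying the series by 1/(1-q^k) is a running sum with stride k
        for m in range(k, Mmax + 1):
            coeffs[m] += coeffs[m - k]
    return coeffs

# partitions of 6 into at most 3 parts: 6, 5+1, 4+2, 4+1+1, 3+3, 3+2+1, 2+2+2
print(list(p_le(3, 6)))   # -> [1, 1, 2, 3, 4, 5, 7]
```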
notebook_file = r"P9_Bose_Einstein_Statistics.ipynb"
html_converter(notebook_file)